To address the problems of poor performance, susceptibility to suboptimal solutions, and inefficiency in neural network hyperparameter optimization, an Improved Real-Coded Genetic Algorithm (IRCGA) based hyperparameter optimization algorithm for neural networks was proposed, named IRCGA-DNN (IRCGA for Deep Neural Network). Firstly, a real-coded form was used to represent the values of hyperparameters, which made the search space of hyperparameters more flexible. Then, a hierarchical proportional selection operator was introduced to enhance the diversity of the solution set. Finally, improved single-point crossover and mutation operators were designed to explore the hyperparameter space more thoroughly and to improve the efficiency and quality of the optimization algorithm, respectively. Two simulation datasets were used to evaluate IRCGA-DNN's performance in damage effectiveness prediction and convergence efficiency. Experimental results on the two datasets indicate that, compared to GA-DNN (Genetic Algorithm for Deep Neural Network), the proposed algorithm reduces the convergence iterations by 8.7% and 13.6% respectively, with little difference in MSE (Mean Square Error); compared to IGA-DNN (Improved Genetic Algorithm for Deep Neural Network), IRCGA-DNN reduces the convergence iterations by 22.2% and 13.6% respectively. The results show that the proposed algorithm is better in both convergence speed and prediction performance, and is suitable for hyperparameter optimization of neural networks.
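The real-coded crossover and mutation operators described above can be sketched roughly as follows; this is a generic illustration of real-coded GA operators, not the paper's exact improved designs (the blending weight `alpha`, Gaussian mutation, and function names are assumptions for illustration):

```python
import random

def single_point_crossover(p1, p2, alpha=0.5):
    """Single-point crossover sketch for real-coded chromosomes: genes after
    the cut point are blended (real-coded style) rather than swapped as bits."""
    point = random.randint(1, len(p1) - 1)
    c1 = p1[:point] + [alpha * a + (1 - alpha) * b for a, b in zip(p1[point:], p2[point:])]
    c2 = p2[:point] + [alpha * b + (1 - alpha) * a for a, b in zip(p1[point:], p2[point:])]
    return c1, c2

def mutate(chrom, bounds, rate=0.1, scale=0.1):
    """Gaussian mutation sketch: perturb each hyperparameter with probability
    `rate`, then clamp back into its legal bounds."""
    out = []
    for gene, (lo, hi) in zip(chrom, bounds):
        if random.random() < rate:
            gene += random.gauss(0.0, scale * (hi - lo))
        out.append(min(max(gene, lo), hi))
    return out
```

Real coding keeps each hyperparameter as a continuous value, so crossover can interpolate between parents instead of being limited to bit swaps.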
In edge computing, computing resources are deployed at edge computing nodes closer to end users, and selecting appropriate edge computing node deployment locations from the candidate locations can enhance the node capacity and the user Quality of Service (QoS) of edge computing services. However, there is little research on how to place edge computing nodes so as to reduce the cost of edge computing. In addition, no existing edge computing node deployment algorithm maximizes the robustness of edge services while minimizing the deployment cost of edge computing nodes under QoS constraints such as edge service delay. To address the above issues, firstly, the edge computing node placement problem was transformed into a constrained minimum dominating set problem by building a model of computing nodes, user transmission delay, and robustness. Then, the concept of overlap domination was proposed, so that network robustness could be measured on the basis of overlap domination, and an edge computing node placement algorithm based on overlap domination was designed, namely CHAIN (edge server plaCement algoritHm based on overlAp domINation). Simulation results show that CHAIN reduces the system latency by 50.54% and 50.13% compared to the coverage-oriented approximation algorithm and the base-station-oriented random algorithm, respectively.
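The minimum dominating set formulation at the core of the placement model can be illustrated with the textbook greedy approximation below; this is not CHAIN itself (which adds overlap-domination and QoS constraints), just a minimal sketch of the underlying problem:

```python
def greedy_dominating_set(adj):
    """Greedy approximation for minimum dominating set: repeatedly pick the
    node that dominates the most not-yet-dominated nodes.
    adj maps each node to the set of its neighbours."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        # a node dominates itself plus its neighbours
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        chosen.add(best)
        undominated -= {best} | adj[best]
    return chosen
```

In the placement analogy, nodes are candidate locations, edges are "can serve within the delay bound" relations, and the chosen set is the deployed edge nodes.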
K-Means algorithms typically use Euclidean distance to calculate the similarity between data points when dealing with large-scale heterogeneous data, but this method suffers from low efficiency and high computational complexity. Inspired by the significant advantage of Hamming distance in data similarity calculation, a Quantum K-Means Hamming (QKMH) algorithm was proposed to calculate similarity. First, the data was prepared and encoded into quantum states, and the quantum Hamming distance was used to calculate the similarity between the points to be clustered and the K cluster centers. Then, Grover's minimum search algorithm was improved to find the cluster center closest to each point to be clustered. Finally, these steps were repeated until the designated number of iterations was reached or the cluster centers no longer changed. Based on the quantum simulation framework QisKit, the proposed algorithm was validated on the MNIST handwritten digit dataset and compared with various traditional and improved methods. Experimental results show that the F1 score of the QKMH algorithm is improved by 10 percentage points compared with that of the Manhattan distance-based quantum K-Means algorithm and by 4.6 percentage points compared with that of the latest optimized Euclidean distance-based quantum K-Means algorithm, and the time complexity of the QKMH algorithm is lower than those of the comparison algorithms.
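The classical analogue of the QKMH assignment step can be sketched as follows; the quantum version encodes the binary data into quantum states and replaces the linear minimum scan with Grover's minimum search, which this plain-Python sketch does not attempt to reproduce:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary strings:
    the number of positions at which they differ."""
    return sum(x != y for x, y in zip(a, b))

def assign_to_nearest(point, centers):
    """Return the index of the cluster center at minimum Hamming distance,
    playing the role that Grover's minimum search plays in QKMH."""
    return min(range(len(centers)), key=lambda k: hamming(point, centers[k]))
```

Hamming distance needs only bitwise comparisons, which is what makes it cheaper than Euclidean distance on binarized data.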
To address the problems of low detection precision, coarse masks, and weak generalization ability of existing instance segmentation algorithms on occluded and blurred instances, an instance segmentation algorithm based on Fastformer and self-supervised contrastive learning was proposed. Firstly, in order to enhance the algorithm's ability to extract global information from feature maps, a Fastformer module based on additive attention was added after the feature extraction network to deeply model the interrelationships between pixels in each layer of the feature maps. Secondly, inspired by self-supervised learning, a self-supervised contrastive learning module was added to perform contrastive learning on instances in images, enhancing the algorithm's ability to understand images and thereby improving segmentation results in environments with heavy noise interference. Experimental results show that the proposed algorithm improves the mean Average Precision (mAP) by 3.1 and 2.5 percentage points respectively, compared to the recent classical instance segmentation algorithm SOLOv2 (Segmenting Objects by LOcations v2), on the Cityscapes and COCO2017 datasets. The proposed algorithm also achieves a good balance between real-time performance and precision, showing good robustness in instance segmentation of complex scenes.
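The additive attention that gives Fastformer its efficiency can be sketched as the pooling step below: each token receives a scalar score, a softmax over tokens produces weights, and a weighted sum yields a global context vector in O(n·d) instead of the O(n²·d) of full self-attention. This is a simplified illustration (Fastformer additionally combines global query and key vectors with element-wise interactions); the scoring vector `w` would be learned in practice:

```python
import math

def additive_attention_pool(x, w):
    """Additive-attention pooling sketch in the spirit of Fastformer.
    x: list of n token vectors (each length d); w: scoring vector (length d).
    Returns a single global context vector of length d."""
    n, d = len(x), len(x[0])
    # one scalar score per token, scaled as in attention
    scores = [sum(xi[j] * w[j] for j in range(d)) / math.sqrt(d) for xi in x]
    # softmax over tokens (subtract max for numerical stability)
    m = max(scores)
    alpha = [math.exp(s - m) for s in scores]
    z = sum(alpha)
    alpha = [a / z for a in alpha]
    # weighted sum of token vectors -> global context
    return [sum(alpha[i] * x[i][j] for i in range(n)) for j in range(d)]
```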
Concerning the problems of the high cost of massive data storage and the low efficiency of data traceability verification in Internet of Things (IoT) systems, a trusted data traceability method based on Merkle Mountain Range (MMR), named MMRBCV (Merkle Mountain Range BlockChain Verification), was proposed. Firstly, the InterPlanetary File System (IPFS) was used to store the IoT data. Secondly, consortium blockchains and private blockchains were adopted to design a double-blockchain structure for reliable recording of the data flow process. Finally, a block structure based on the MMR was constructed to realize rapid verification of lightweight IoT nodes in the process of data traceability. Experimental results show that MMRBCV reduces the amount of data downloaded during data tracing, and that the data verification time is related to the structure of the MMR: when the MMR forms a perfect binary tree, the verification time is short. When the block height is 200 000, MMRBCV's maximum verification time is about 10 ms, which is about 72% shorter than that of Simplified Payment Verification (SPV) (about 36 ms), indicating that the proposed method improves verification efficiency effectively.
A popularity prediction method for Twitter topics based on evolution patterns was proposed to address the problem that previous popularity prediction methods did not take into account the differences between evolution patterns or the time-effectiveness of prediction. Firstly, the K-SC (K-Spectral Centroid) algorithm was used to cluster the popularity sequences of a large number of historical topics, and 6 evolution patterns were obtained. Then, a Fully Connected Network (FCN) was trained as the prediction model using the historical topic data of each evolution pattern. Finally, in order to select the prediction model for a topic to be predicted, the Amplitude-Alignment Dynamic Time Warping (AADTW) algorithm was proposed to calculate the similarity between the known popularity sequence of the topic and each evolution pattern, and the prediction model of the evolution pattern with the highest similarity was selected to predict the popularity. In the task of predicting the popularity of the next 5 hours from the known popularity of the first 20 hours, the Mean Absolute Percentage Error (MAPE) of the proposed method was reduced by 58.2% and 31.0% respectively, compared with those of the Auto-Regressive Integrated Moving Average (ARIMA) method and a method using a single fully connected network. Experimental results show that the model group based on evolution patterns can predict the popularity of Twitter topics more accurately than a single model.
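The dynamic time warping at the base of AADTW can be sketched with the classic DP recurrence below; AADTW additionally aligns amplitudes before warping, which is left to the caller in this sketch:

```python
def dtw_distance(s, t):
    """Classic dynamic-time-warping distance between two numeric sequences.
    d[i][j] = local cost + min over the three predecessor cells."""
    inf = float("inf")
    n, m = len(s), len(t)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Because DTW warps the time axis, a topic whose popularity rises at the same heights but at a different pace still matches its evolution pattern closely.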
Aiming at the problems that current valve identification methods in industry have a high miss rate for overlapping targets, low detection precision, poor target encapsulation, and inaccurate positioning of circle centers, a valve identification method based on double detection was proposed. Firstly, data augmentation was used to expand the samples in a lightweight way. Then, Spatial Pyramid Pooling (SPP) and Path Aggregation Network (PAN) were added on the basis of a deep convolutional network; at the same time, the anchor boxes were adjusted and the loss function was improved to extract the valve prediction boxes. Finally, the Circle Hough Transform (CHT) method was used to identify the valves in the prediction boxes a second time, so as to accurately locate the valve regions. The proposed method was compared with the original You Only Look Once (YOLO)v3, YOLOv4, and traditional CHT methods, and the detection results were evaluated jointly using precision, recall, and coincidence degree. Experimental results show that the average precision and recall of the proposed method reach 97.1% and 94.4% respectively, 2.9 and 1.8 percentage points higher than those of the original YOLOv3 method. In addition, the proposed method improves the target encapsulation and the location accuracy of the target center: the Intersection Over Union (IOU) between the corrected box and the ground-truth box reaches 0.95, which is 0.05 higher than that of the traditional CHT method. The proposed method improves the success rate of target capture while improving identification accuracy, and has practical application value.
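The Circle Hough Transform used for the secondary identification can be sketched, for a known radius, as the voting accumulator below (production code would use a library routine such as OpenCV's `HoughCircles`; the grid size, angular step, and threshold here are illustrative assumptions):

```python
import math

def hough_circle_centers(edge_points, radius, shape, threshold):
    """Minimal circle Hough transform sketch for a fixed radius: every edge
    point votes for all candidate centers at distance `radius` from it;
    accumulator cells with enough votes are returned as circle centers.
    shape is (height, width) of the vote grid; points are (x, y)."""
    h, w = shape
    acc = [[0] * w for _ in range(h)]
    for (x, y) in edge_points:
        for deg in range(0, 360, 2):
            a = int(round(x - radius * math.cos(math.radians(deg))))
            b = int(round(y - radius * math.sin(math.radians(deg))))
            if 0 <= a < w and 0 <= b < h:
                acc[b][a] += 1
    return [(a, b) for b in range(h) for a in range(w) if acc[b][a] >= threshold]
```

Running this only inside each YOLO prediction box keeps the vote space small, which is what makes the double-detection scheme practical.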
In response to the issues of security and privacy preservation in mobile cloud computing, an anonymity mechanism using cloud storage was proposed. Zero-knowledge proofs and digital signature technology were introduced into anonymous registration to simplify the steps of key authentication; building upon this, a third party was used to bind users to their identity certificates, preventing legitimate cloud services from being abused for malicious purposes. For data sharing, the focus was on how to take advantage of the account parameters of sharers so as to resolve the security issues caused by secret key loss. Theoretical analysis shows that the proposed identity certificate and shared key generation schemes protect users' privacy.
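The kind of zero-knowledge proof used in anonymous registration can be illustrated with a toy Schnorr-style identification protocol; this is a generic textbook sketch over a deliberately tiny (insecure) group, not the paper's scheme, and the parameters below are illustrative only:

```python
import random

# Toy group parameters (far too small for real security):
# P = 2*Q + 1 with P, Q prime; G generates the order-Q subgroup.
P = 467
Q = 233
G = 4

def prove(x):
    """Prover shows knowledge of x with y = G^x mod P without revealing x."""
    r = random.randrange(1, Q)
    t = pow(G, r, P)                 # commitment
    c = random.randrange(1, Q)       # challenge (hashed from t in practice)
    s = (r + c * x) % Q              # response
    return t, c, s

def verify(y, t, c, s):
    """Check G^s == t * y^c (mod P), i.e. G^(r+cx) == G^r * (G^x)^c."""
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

The verifier learns that the prover holds the secret key behind `y`, but the transcript reveals nothing about `x` itself, which is what lets registration stay anonymous.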